Results 1 - 20 of 669
1.
Estud. pesqui. psicol. (Impr.) ; 23(4): 1486-1505, dez. 2023.
Article in Portuguese | LILACS-Express | LILACS | ID: biblio-1538191

ABSTRACT



The digital algorithm has allowed informational conglomerates to manage the data of web users. In a discreet and personalized way, this new form of governmentality collects, organizes, exchanges and returns data to the individual in the form of more information. More and more, it comes up against the singular dimension, touching the field of jouissance via the proliferation of objects a, which, in the Lacanian theory of discourses, assume the double function of loss and of an incessant attempt to supplement jouissance. With the informational increase, the object reaches its social apex and the digital reaches a discursive level. Inserting itself in the same niche as knowledge [savoir], digital information takes advantage of the subjective division, leaving little space for the subject to deal with the entropy of his jouissance via desire. If negentropy is the attribute of savoir that limits the dispersion of jouissance, in information processed and returned algorithmically this process is accelerated, acting directly on the economy of affects. To the detriment of the subject, what remains is an experience of jouissance that is increasingly direct and raw, less mediated by savoir and by the Other.



2.
Rev. sanid. mil ; 77(3): e04, jul.-sep. 2023. tab, graf
Article in Spanish | LILACS-Express | LILACS | ID: biblio-1536754

ABSTRACT



Introduction: Stevens-Johnson syndrome (SJS) is a potentially fatal dermatosis characterized by extensive epidermal and mucosal necrosis accompanied by systemic compromise. Together with toxic epidermal necrolysis (TEN), it is considered a type IV hypersensitivity reaction, related to certain drugs in 60% of cases; it is a rare diagnosis but carries a high mortality of up to 40%. Case report: A 34-year-old male developed generalized erythema immediately after administration of trimethoprim/sulfamethoxazole. A complete blood count showed leukocytosis and neutrophilia, with elevated ESR, CRP, and IgE. After clinical questioning, the ALDEN algorithm was applied and scored 10 points, implicating the aforementioned drug. Treatment was started with methylprednisolone, diphenhydramine, intravenous human immunoglobulin and a cutaneous therapeutic plan, resulting in clinical improvement without complications or sequelae until the day of discharge. In conclusion, multidisciplinary management is required to address the patient's clinical manifestations and support a quick and effective recovery.

3.
Rev. medica electron ; 45(4)ago. 2023.
Article in Spanish | LILACS-Express | LILACS | ID: biblio-1515362

ABSTRACT



Introduction: In recent years there have been profound changes in the management of polytrauma patients. New concepts have been developed concerning possible complications, treatment schemes and prognostic scales, as well as the identification of elements related to patient evolution, in order to determine early which life-threatening lesions require immediate surgical control or radiological intervention. Multislice computed tomography and other imaging studies provide plane-by-plane images of body structures and very detailed, diagnostically useful information; nevertheless, there is no consensus on when to indicate one or the other in trauma. Objective: To develop an algorithm for the efficient indication of imaging studies in the polytrauma patient. Materials and methods: A technological development study was carried out; the working universe comprised 43 patients meeting polytrauma criteria who needed imaging studies, admitted to the Clinical Surgical University Hospital Comandante Faustino Pérez Hernández between March 2020 and March 2021. Results: An algorithm was developed to standardize the indication of imaging studies in trauma, taking into account patient safety and protection as well as the service life of the computed tomography equipment. Conclusions: The developed algorithm facilitates decision-making regarding the use of imaging resources in the care of polytrauma patients.

4.
Odovtos (En línea) ; 25(2)ago. 2023.
Article in English | LILACS-Express | LILACS | ID: biblio-1448745

ABSTRACT

Three-dimensional cone-beam computed tomography (CBCT) plays an important role in the detection of vertical root fractures (VRFs). The degradation of image quality by artifacts from high-density objects such as dental implants is well documented. This study aimed to assess the effect of tooth-implant distance and of the metal artifact reduction (MAR) algorithm on the detection of VRFs on CBCT scans. The study was conducted on 20 endodontically treated single-rooted teeth. VRFs were induced in 10 teeth, while the other 10 remained intact. An implant was inserted in the right second premolar socket, and two teeth were placed randomly in the right canine and right first premolar sockets; CBCT was performed with and without the MAR algorithm. SPSS 21 was used to analyze the results (alpha = 0.05). Sensitivity, specificity, accuracy, and positive predictive value were all higher without MAR at both close (roots in first premolar sockets) and far (roots in canine sockets) distances from the implant. For both radiologists, diagnostic accuracy was highest at the far distance from the implant without MAR and lowest at the close distance. Applying the MAR algorithm had no positive effect on the detection of VRFs on CBCT scans at either distance.


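The four diagnostic measures reported above follow directly from a 2x2 confusion table. A small sketch with hypothetical reader counts (not the study's data):

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Standard 2x2 diagnostic-accuracy measures."""
    sensitivity = tp / (tp + fn)          # true positive rate
    specificity = tn / (tn + fp)          # true negative rate
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    ppv = tp / (tp + fp)                  # positive predictive value
    return sensitivity, specificity, accuracy, ppv

# Hypothetical reader calls for 10 fractured and 10 intact roots
sens, spec, acc, ppv = diagnostic_metrics(tp=9, fp=2, tn=8, fn=1)
print(round(sens, 2), round(spec, 2), round(acc, 2), round(ppv, 2))  # 0.9 0.8 0.85 0.82
```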

5.
Arch. cardiol. Méx ; 93(2): 164-171, Apr.-Jun. 2023. tab, graf
Article in English | LILACS-Express | LILACS | ID: biblio-1447247

ABSTRACT

Background: In 1996, Iturralde et al. published an algorithm based on QRS polarity to determine the location of accessory pathways (AP); the algorithm predates the widespread practice of invasive electrophysiology. Purpose: To validate the QRS-polarity algorithm in a modern cohort of subjects submitted to radiofrequency catheter ablation (RFCA), determining its global accuracy and its accuracy for parahisian APs. Methods: We conducted a retrospective analysis of patients with Wolff-Parkinson-White (WPW) syndrome who underwent an electrophysiological study (EPS) and RFCA. We employed the QRS-polarity algorithm to predict the anatomical location of the AP and compared this prediction with the true anatomic location determined in the EPS. Accuracy was assessed with Cohen's kappa coefficient (k) and the Pearson correlation coefficient. Results: A total of 364 patients were included (mean age 30 years, 57% male). The global k score was 0.78 and the Pearson coefficient 0.90. Accuracy was also evaluated per zone; the best correlation was for left lateral APs (k of 0.97). The 26 patients with a parahisian AP showed great variability in ECG features; with the QRS-polarity algorithm, 34.6% of these patients had a correct anatomical location, 42.3% an adjacent location, and only 23% an incorrect location. Conclusion: The QRS-polarity algorithm has good global accuracy; its precision is high, especially for left lateral APs, and it is also useful for parahisian APs.


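Agreement statistics like the kappa used above can be computed from paired predicted and true labels. A small sketch; the location labels are hypothetical (e.g. "LL" for left lateral, "PS" for parahisian), not the study's records:

```python
from collections import Counter

def cohen_kappa(predicted, actual):
    """Cohen's kappa: chance-corrected agreement between two label sequences."""
    n = len(actual)
    observed = sum(p == a for p, a in zip(predicted, actual)) / n
    pred_counts, act_counts = Counter(predicted), Counter(actual)
    labels = set(predicted) | set(actual)
    # Expected agreement under independent marginal label frequencies
    expected = sum(pred_counts[l] * act_counts[l] for l in labels) / n ** 2
    return (observed - expected) / (1 - expected)

pred = ["LL", "LL", "PS", "RA", "LL", "PS"]   # algorithm's predicted zones
true = ["LL", "LL", "PS", "LL", "LL", "PS"]   # zones confirmed at EPS
print(round(cohen_kappa(pred, true), 3))       # 0.7
```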

7.
Colomb. med ; 54(1)mar. 2023.
Article in English | LILACS-Express | LILACS | ID: biblio-1534279

ABSTRACT

Background: Pathology reports are stored as unstructured, ungrammatical, fragmented, and abbreviated free text with linguistic variability among pathologists. For this reason, extracting tumor information requires significant human effort. Recording data in an efficient, high-quality format is essential to implementing and establishing a hospital-based cancer registry. Objective: This study aimed to describe the implementation of a natural language processing algorithm for oncology pathology reports. Methods: An algorithm was developed to process oncology pathology reports in Spanish and extract 20 medical descriptors. The approach is based on the successive matching of regular expressions. Results: Validation was performed on 140 pathology reports. Topography was identified in all reports both manually and by the algorithm; morphology was identified in 138 reports by the human reviewer and in 137 by the algorithm. The average fuzzy-matching score was 68.3 for topography and 89.5 for morphology. Conclusions: A preliminary validation of the algorithm against human extraction was performed on a small set of reports, with satisfactory results. This shows that a regular-expression approach can accurately and precisely extract multiple specimen attributes from free-text Spanish pathology reports. Additionally, we developed a website to facilitate collaborative validation at a larger scale, which may be helpful for future research on the subject.


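A regular-expression pipeline of the kind described, paired with a fuzzy-matching score, can be sketched as below. The patterns, report text, and descriptor names are invented for illustration and are not the study's actual 20 descriptors:

```python
import re
from difflib import SequenceMatcher

# Hypothetical patterns; one regex per descriptor, applied in succession.
PATTERNS = {
    "topography": re.compile(r"localizaci[oó]n:\s*(.+)", re.IGNORECASE),
    "morphology": re.compile(r"diagn[oó]stico:\s*(.+)", re.IGNORECASE),
}

def extract(report):
    """First successful match per descriptor wins."""
    found = {}
    for name, pattern in PATTERNS.items():
        for line in report.splitlines():
            m = pattern.search(line)
            if m:
                found[name] = m.group(1).strip()
                break
    return found

def fuzzy_score(a, b):
    """0-100 similarity, analogous to a fuzzy-matching score."""
    return 100 * SequenceMatcher(None, a.lower(), b.lower()).ratio()

report = ("Localización: colon ascendente\n"
          "Diagnóstico: adenocarcinoma moderadamente diferenciado")
fields = extract(report)
print(fields["topography"])  # colon ascendente
print(round(fuzzy_score(fields["morphology"], "adenocarcinoma")))
```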

8.
Chinese Journal of Emergency Medicine ; (12): 606-611, 2023.
Article in Chinese | WPRIM | ID: wpr-989829

ABSTRACT

Objective: To establish a blood consumption prediction model for emergency trauma patients based on a machine learning algorithm, in order to guide blood collection and blood supply institutions in preparing for the early blood demand of mass casualties in public emergencies. Methods: A retrospective analysis was conducted on trauma patients in the emergency system database of 12 hospitals in Zhejiang Province from January 2018 to December 2020. Patients with a chronic medical history such as hematological diseases or tumors, and those transferred from other hospitals after initial treatment, were excluded. Patients were divided into a transfusion group and a non-transfusion group according to whether they received blood transfusion. Differences in demographic and clinical characteristics between the two groups were compared, and a machine learning algorithm (XGBoost) was used to build the blood consumption prediction model and the blood consumption volume prediction model for emergency trauma patients. Results: A total of 2025 patients were included, 1146 in the transfusion group and 879 in the non-transfusion group. The blood demand of emergency trauma patients mainly occurred within 3 days of admission (60%). The main variables in the blood consumption prediction model were shock index, hematocrit, systolic blood pressure, abdominal injury, pelvic injury, ascites and hemoglobin. Compared with the traditional prediction model, the XGBoost model had the highest hit rate, 59.0%. The blood consumption volume prediction model was most accurate when seven blood volume levels were adopted, with deviation fluctuating within 0-1 U. According to the prediction model, the blood consumption prediction formula was ∑ n_w × c.
Conclusions: The preliminarily constructed prediction model of blood transfusion and blood consumption for emergency trauma patients performs better than the traditional blood transfusion prediction model, providing a reference for optimizing blood demand assessment by hospitals and blood supply institutions during public emergencies.
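The reported formula ∑ n_w × c can be read as summing, over the seven blood-volume levels, the number of patients predicted at each level (n_w) times that level's unit volume (c). A minimal sketch under that reading; the unit volumes per level below are assumptions, not the paper's actual level definitions:

```python
# Hypothetical seven-level scheme: blood units (U) assigned to each level.
LEVEL_UNITS = {1: 0, 2: 2, 3: 4, 4: 6, 5: 8, 6: 10, 7: 15}  # assumed values

def total_demand(predicted_levels):
    """Aggregate demand: sum over levels of n_w patients times c units."""
    counts = {}
    for level in predicted_levels:
        counts[level] = counts.get(level, 0) + 1
    return sum(n * LEVEL_UNITS[w] for w, n in counts.items())

# e.g. the model assigns five incoming casualties to these levels:
print(total_demand([1, 3, 3, 5, 7]))  # 31
```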

9.
Journal of Environmental and Occupational Medicine ; (12): 1033-1038, 2023.
Article in Chinese | WPRIM | ID: wpr-988745

ABSTRACT

Background: With increasing exposure to hazardous chemicals in the workplace and the growing frequency of occupational injuries and occupational safety accidents, obtaining occupational exposure limits for hazardous chemicals is urgent. Objective: To obtain more of the unknown immediately-dangerous-to-life-or-health (IDLH) concentrations of hazardous chemicals in the workplace by applying quantitative structure-activity relationship (QSAR) prediction to IDLH concentrations, and to provide a theoretical basis and technical support for the assessment and prevention of occupational injuries. Methods: QSAR was used to correlate the IDLH values of 50 benzene compounds and derivatives with the molecular structures of the target compounds. First, the affinity propagation algorithm was applied to cluster the sample sets. Second, Dragon 2.1 software was used to calculate and pre-screen 537 molecular descriptors. Third, a genetic algorithm selected six characteristic molecular descriptors as modeling variables, from which a multiple linear regression (MLR) model and two nonlinear models, using support vector machine (SVM) and artificial neural network (ANN), were constructed. Finally, model performance was evaluated by internal and external validation, and a Williams plot was drawn to determine the applicability domain of the selected models. Results: The ANN model results showed that R_train …
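The descriptor-matrix-to-MLR step described above can be sketched on synthetic data. The descriptor values, coefficients, and fit threshold below are illustrative stand-ins, not the study's actual compounds or descriptors:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the QSAR setup: 50 compounds x 6 selected descriptors,
# with log-scaled IDLH generated from known coefficients plus small noise.
X = rng.normal(size=(50, 6))
true_beta = np.array([0.8, -0.5, 0.3, 0.0, 1.2, -0.7])
y = X @ true_beta + 0.05 * rng.normal(size=50)

# MLR fit by least squares (design matrix with an intercept column)
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# Internal validation: coefficient of determination on the training set
y_hat = A @ coef
r2 = 1 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)
print(r2 > 0.95)  # near-perfect fit expected on this synthetic data
```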

10.
Journal of Medical Biomechanics ; (6): E346-E352, 2023.
Article in Chinese | WPRIM | ID: wpr-987957

ABSTRACT

Objective: To investigate the effect of different optimization algorithms on the accurate reconstruction of traffic accidents. Methods: Non-dominated sorting genetic algorithm II (NSGA-II), neighborhood cultivation genetic algorithm (NCGA) and multi-objective particle swarm optimization (MOPSO) were used to optimize the multi-rigid-body dynamic reconstruction of a real case, and their effects on convergence speed and on the optimal approximate solution were studied. The optimal initial impact parameters were then used as boundary conditions for finite element simulation, and the simulated results were compared with the actual injuries. Results: NCGA converged faster and produced a better result in the optimization process. The kinematic response of the pedestrian-vehicle collision reconstructed from the optimal approximate solution was consistent with the surveillance video, and the predicted craniocerebral injury was basically consistent with the cadaver examination. Conclusions: The combination of an optimization algorithm, rigid multibody dynamics and the finite element method can accurately reconstruct traffic accidents and reduce the influence of human factors.
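All three optimizers named above rank candidate reconstructions by Pareto dominance. A minimal sketch of the non-dominated filtering step, the core ranking idea in NSGA-II, with made-up objective values (for a reconstruction these might be, e.g., velocity error and rest-position error):

```python
def dominates(a, b):
    """a dominates b if it is no worse on every objective and strictly
    better on at least one (minimization convention)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated_front(points):
    """First Pareto front: points not dominated by any other point."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

sols = [(1.0, 5.0), (2.0, 3.0), (3.0, 4.0), (4.0, 1.0), (2.5, 3.5)]
print(sorted(non_dominated_front(sols)))  # [(1.0, 5.0), (2.0, 3.0), (4.0, 1.0)]
```

NSGA-II repeats this filtering to rank the whole population into successive fronts, then breaks ties within a front by crowding distance.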

11.
Journal of Sun Yat-sen University(Medical Sciences) ; (6): 310-317, 2023.
Article in Chinese | WPRIM | ID: wpr-965847

ABSTRACT

Objective: To investigate strategies for preventing bilateral vocal cord adhesion after simultaneous CO2-laser resection of bilateral Reinke's space edema. Methods: Seventy patients who underwent CO2-laser resection of bilateral Reinke's space edema of the vocal cords in our hospital from June 2018 to June 2021 were retrospectively selected. According to postoperative vocal cord adhesion, patients were divided into an adhesion group (35 cases) and a non-adhesion group (35 cases), and the general data of the two groups were compared. Multivariate logistic regression analysis was used to evaluate risk factors for postoperative vocal cord adhesion. A prediction model for the risk of postoperative vocal cord adhesion was established using the chi-squared automatic interaction detection (CHAID) classification tree algorithm, and the model's application value was evaluated with gain and index charts. Results: Multivariate analysis showed that grade II surgical range and depth, laser power ≥ 5 W and anterior commissure involvement were risk factors for postoperative vocal cord adhesion [OR (95% CI): 6.113 (2.346, 17.451); 5.214 (1.469, 15.263); 18.651 (1.689, 36.203)]. The classification tree model showed that anterior commissure involvement was an important predictor of postoperative vocal cord adhesion (76.92%; χ2 = 11.993, P = 0.001), and the gain and index charts indicated a good model. Conclusion: Clinical attention should be paid to surgical range and depth, laser power and anterior commissure involvement, and timely prevention strategies should be formulated to reduce the risk of vocal cord adhesion.
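Odds ratios with 95% confidence intervals, like those reported above, can be computed from a 2x2 exposure-outcome table with the standard Wald method. The counts below are hypothetical, not the study's data:

```python
from math import exp, log, sqrt

def odds_ratio_ci(a, b, c, d, z=1.96):
    """OR and Wald 95% CI from a 2x2 table:
       a = exposed with adhesion,   b = exposed without,
       c = unexposed with adhesion, d = unexposed without."""
    or_ = (a * d) / (b * c)
    se = sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # SE of log(OR)
    return or_, exp(log(or_) - z * se), exp(log(or_) + z * se)

# Hypothetical counts for a risk factor vs. postoperative adhesion
or_, lo, hi = odds_ratio_ci(20, 5, 8, 37)
print(round(or_, 1), round(lo, 1), round(hi, 1))  # 18.5 5.3 64.1
```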

12.
Journal of Zhejiang University. Medical sciences ; (6): 243-248, 2023.
Article in English | WPRIM | ID: wpr-982041

ABSTRACT

The application of artificial neural network algorithms to the pathological diagnosis of gastrointestinal malignant tumors has become a research hotspot. Previous algorithm research has mainly focused on models based on convolutional neural networks, while only a few studies have combined convolutional neural networks with recurrent neural networks. The research covers classical histopathological diagnosis and molecular typing of malignant tumors, as well as the use of artificial neural networks to predict patient prognosis. This article reviews research progress on artificial neural network algorithms in the pathological diagnosis and prognosis prediction of digestive tract malignant tumors.


Subject(s)
Humans , Neural Networks, Computer , Algorithms , Prognosis , Gastrointestinal Neoplasms/diagnosis
13.
Chinese Journal of Traumatology ; (6): 211-216, 2023.
Article in English | WPRIM | ID: wpr-981918

ABSTRACT

PURPOSE: Non-prosthetic peri-implant fractures are challenging injuries. Multiple factors must be carefully evaluated for an adequate therapeutic strategy, such as the state of bone healing, the type of implant, the timing and the personnel who performed the previous surgery, and the stability of fixation. The aim of this study is to propose a rationale for treatment. METHODS: The peri-implant femoral fractures (PIFFs) system, a therapeutic algorithm, was developed for the management of all patients presenting a subtype A PIFF, based on the type of the original implant (extra- vs. intra-medullary), implant length and fracture location. The adequacy and reliability of the proposed algorithm and the fracture healing process were assessed at the last clinical follow-up using the Parker mobility score and radiological assessment, respectively. All complications were recorded. Continuous variables are expressed as mean and standard deviation, or median and range, according to their distribution; categorical variables as frequencies and percentages. RESULTS: In this retrospective case series of 33 PIFFs, the mean postoperative Parker mobility score was 5.60 ± 2.54 points. Five patients (15.1%) achieved complete mobility without aids (9 points) and 1 patient (3.0%) was unable to walk; two other patients (6.1%) were non-ambulatory prior to the PIFF. The mean follow-up was 21.51 ± 9.12 months (range 6 - 48 months). There were 7 complications (21.2%), equally distributed between patients managed with nailing and with plating. There were no cases of nonunion or mechanical failure of the original implant. CONCLUSION: The proposed treatment algorithm proves adequate, reliable and straightforward in assisting the orthopaedic trauma surgeon with the difficult decision-making process in the management of PIFFs occurring in previously healed fractures. In addition, it may become a useful tool to optimize the use of the classification, thus potentially improving outcomes and minimizing complications.


Subject(s)
Humans , Periprosthetic Fractures/surgery , Retrospective Studies , Femoral Fractures/surgery , Reproducibility of Results , Fracture Fixation, Internal , Fracture Healing , Treatment Outcome
14.
Journal of Biomedical Engineering ; (6): 529-535, 2023.
Article in Chinese | WPRIM | ID: wpr-981572

ABSTRACT

As one of the standard physiological signals of the human body, the photoplethysmography (PPG) signal contains detailed information about the blood microcirculation and is commonly used in various medical scenarios, where accurate detection of the pulse waveform and quantification of its morphological characteristics are essential steps. In this paper, a modular pulse wave preprocessing and analysis system is developed based on design-pattern principles. The system implements each part of the preprocessing and analysis process as an independent, compatible and reusable functional module. In addition, the pulse waveform detection process is improved, and a new waveform detection algorithm composed of screening, checking and deciding stages is proposed. Verification shows that each module of the algorithm has a practical design, high waveform recognition accuracy and strong anti-interference capability. The modular pulse wave preprocessing and analysis software system developed in this paper can meet individual preprocessing requirements for various pulse wave application studies on different platforms, and the proposed high-accuracy algorithm also provides a new approach to pulse wave analysis.


Subject(s)
Humans , Systems Analysis , Algorithms , Software , Heart Rate , Microcirculation
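A toy version of a screening-checking-deciding pulse-peak detector might look like the following. The three stages, thresholds and waveform are illustrative assumptions; the paper's actual algorithm is not reproduced here:

```python
def detect_pulse_peaks(signal, min_height, min_distance):
    """Three-stage detection loosely mirroring screening -> checking -> deciding."""
    # Screening: collect candidate local maxima
    candidates = [i for i in range(1, len(signal) - 1)
                  if signal[i - 1] < signal[i] >= signal[i + 1]]
    # Checking: discard sub-threshold candidates (noise, dicrotic bumps)
    checked = [i for i in candidates if signal[i] >= min_height]
    # Deciding: enforce a refractory interval between accepted beats
    accepted = []
    for i in checked:
        if not accepted or i - accepted[-1] >= min_distance:
            accepted.append(i)
    return accepted

# Toy waveform: two beats, each followed by a small dicrotic bump
wave = [0, 2, 5, 9, 6, 4, 5, 3, 1, 0, 2, 6, 10, 7, 4, 5, 2, 1]
print(detect_pulse_peaks(wave, min_height=8, min_distance=5))  # [3, 12]
```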
15.
Journal of Biomedical Engineering ; (6): 335-342, 2023.
Article in Chinese | WPRIM | ID: wpr-981547

ABSTRACT

When classifying eye movement patterns across different tasks, support vector machines are strongly affected by their parameters. To address this problem, we propose an improved whale optimization algorithm to tune support vector machines and enhance the performance of eye movement data classification. Based on the characteristics of eye movement data, this study first extracts 57 features related to fixations and saccades, then applies the ReliefF algorithm for feature selection. To address the whale algorithm's low convergence accuracy and tendency to fall into local minima, we introduce inertia weights to balance local and global search and accelerate convergence, and use a differential variation strategy to increase individual diversity and escape local optima. Experiments on eight test functions show that the improved whale algorithm achieves the best convergence accuracy and convergence speed. Finally, the support vector machine model optimized by the improved whale algorithm is applied to the task of classifying eye movement data in autism; experiments on a public dataset show that classification accuracy is greatly improved compared with the traditional support vector machine method. Compared with the standard whale algorithm and other optimization algorithms, the proposed optimized model has higher recognition accuracy and provides a new approach to eye movement pattern recognition. In the future, eye movement data could be obtained with eye trackers to assist in medical diagnosis.


Subject(s)
Animals , Support Vector Machine , Whales , Eye Movements , Algorithms
16.
Journal of Central South University(Medical Sciences) ; (12): 84-91, 2023.
Article in English | WPRIM | ID: wpr-971373

ABSTRACT

OBJECTIVES@#Firefighters are prone to psychological trauma and post-traumatic stress disorder (PTSD) in the workplace and have a poor prognosis after PTSD onset. Reliable prediction models allow effective identification of and intervention for patients with early PTSD. By collecting firefighters' psychological traits, psychological states, and work situations, this study aims to develop a machine learning model that effectively and accurately identifies PTSD onset in firefighters and detects important predictors of onset.@*METHODS@#This study conducted a cross-sectional survey by convenience sampling of 628 firefighters from 20 fire brigades in Changsha, evenly distributed across 6 districts and Changsha County. We used the synthetic minority oversampling technique (SMOTE) to balance the dataset and grid search for parameter tuning. The predictive capability of several commonly used machine learning models was compared by 5-fold cross-validation using the area under the receiver operating characteristic curve (ROC-AUC), accuracy, precision, recall, and F1 score.@*RESULTS@#The random forest model performed well in predicting PTSD, with an average AUC of 0.790, a mean accuracy of 90.1%, and an F1 score of 0.945. The three most important predictors were perseverance, forced thinking, and reflective deep thinking, with weights of 0.165, 0.158, and 0.152, respectively, followed by employment time, psychological power, and optimism.@*CONCLUSIONS@#The random forest model for predicting PTSD onset in Changsha firefighters has strong predictive ability; both psychological characteristics and work situation can serve as predictors of firefighters' PTSD onset risk. As a next step, validation on other large datasets is needed to ensure that the predictive models can be used in clinical settings.
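The study's pipeline (SMOTE, grid search, cross-validated random forest) can be sketched on synthetic data. The hand-rolled `smote` below stands in for the imbalanced-learn implementation, and the toy features and parameter grid are illustrative assumptions, not the survey data or the paper's settings:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

def smote(X, y, minority=1, k=5, seed=0):
    """Minimal SMOTE: synthesize minority points on segments to k nearest neighbors."""
    rng = np.random.default_rng(seed)
    Xm = X[y == minority]
    n_new = int((y != minority).sum() - len(Xm))   # how many to add for balance
    synth = []
    for _ in range(n_new):
        i = rng.integers(len(Xm))
        d = np.linalg.norm(Xm - Xm[i], axis=1)
        nn = np.argsort(d)[1:k + 1]                # k nearest minority neighbors
        j = rng.choice(nn)
        synth.append(Xm[i] + rng.random() * (Xm[j] - Xm[i]))
    X_bal = np.vstack([X, synth])
    y_bal = np.concatenate([y, np.full(n_new, minority)])
    return X_bal, y_bal

# Imbalanced toy data standing in for the questionnaire features (hypothetical).
X, y = make_classification(n_samples=600, n_features=10, weights=[0.85],
                           random_state=0)
X_bal, y_bal = smote(X, y)

# 5-fold grid search over random-forest hyperparameters, scored by F1.
grid = GridSearchCV(RandomForestClassifier(random_state=0),
                    {"n_estimators": [50, 100], "max_depth": [5, None]},
                    cv=5, scoring="f1")
grid.fit(X_bal, y_bal)
```

For brevity this sketch oversamples before splitting; in a rigorous evaluation SMOTE should be applied only inside each training fold, otherwise synthetic copies of test-fold neighbors leak into training and inflate the scores.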


Subject(s)
Humans , Stress Disorders, Post-Traumatic/diagnosis , Firefighters/psychology , Cross-Sectional Studies , Algorithms , Machine Learning
17.
Chinese Journal of Medical Instrumentation ; (6): 47-53, 2023.
Article in Chinese | WPRIM | ID: wpr-971302

ABSTRACT

OBJECTIVE@#To introduce and compare current mainstream PET scatter correction methods, and to discuss the remaining problems and development directions of scatter correction.@*METHODS@#Based on the NeuWise Pro PET/CT product of Neusoft Medical System Co., Ltd., a simulation experiment was carried out to evaluate how the radionuclide distribution outside the FOV (field of view) affects the scatter estimation accuracy of each method.@*RESULTS@#Scattered events produced by radionuclides outside the FOV have an obvious impact on the spatial distribution of scatter and should be accounted for in the model. The scatter estimation accuracy of the Monte Carlo method is higher than that of single scatter simulation (SSS).@*CONCLUSIONS@#Clinically, when activity in regions adjacent to but outside the FOV is high, such as the brain, liver, kidney, or bladder, scatter estimation is likely to be biased. Monte Carlo scatter estimation that accounts for the radionuclide distribution outside the FOV helps improve the accuracy of the estimated scatter distribution.
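The Monte Carlo principle behind such scatter estimation, sampling individual photon histories rather than evaluating a single-scatter integral, can be illustrated with a deliberately tiny example: estimating what fraction of 511 keV photons interact while crossing a uniform water slab, by sampling free path lengths from the exponential attenuation law. This is a toy illustration only, not the vendor's algorithm, and the attenuation coefficient is approximate:

```python
import math
import random

def interacting_fraction_mc(mu, thickness, n=200_000, seed=42):
    """Monte Carlo estimate of the fraction of photons that interact
    (scatter or absorb) while crossing a slab of attenuation mu (1/cm)."""
    rng = random.Random(seed)
    interacted = 0
    for _ in range(n):
        # Free path length sampled from the exponential attenuation law.
        path = -math.log(1.0 - rng.random()) / mu
        if path < thickness:
            interacted += 1
    return interacted / n

mu_water = 0.096          # approximate attenuation of water at 511 keV, 1/cm
frac = interacting_fraction_mc(mu_water, thickness=20.0)
```

The analytic answer is 1 - exp(-mu * thickness), so the simulation can be checked directly; a full Monte Carlo scatter estimator extends the same sampling idea to 3-D geometry, scattering angles, and energy windows, which is why it can model out-of-FOV activity that SSS handles poorly.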


Subject(s)
Positron Emission Tomography Computed Tomography , Scattering, Radiation , Computer Simulation , Brain , Monte Carlo Method , Phantoms, Imaging , Image Processing, Computer-Assisted
18.
China Journal of Chinese Materia Medica ; (24): 1132-1136, 2023.
Article in Chinese | WPRIM | ID: wpr-970585

ABSTRACT

In observational studies, herbal prescriptions are usually studied as groups of "similar prescriptions". At present, classification relies mainly on clinical experience, but manual judgment suffers from a lack of unified criteria, heavy labor cost, and difficulty of verification. In building a database of integrated traditional Chinese and Western medicine for the treatment of coronavirus disease 2019 (COVID-19), our research group classified real-world herbal prescriptions using a similarity matching algorithm. The main steps are: (1) determine 78 target prescriptions in advance; (2) label the drugs of each target prescription with four levels of importance; (3) combine, format-convert, and standardize the drug names of the prescriptions to be identified in the herbal medicine database; (4) calculate the similarity between each prescription to be identified and every target prescription; (5) discriminate prescriptions against preset criteria; (6) resolve prescription names by the rule that "large prescriptions cover the small". With this similarity matching algorithm, 87.49% of the real prescriptions in the herbal medicine database were identified, which preliminarily shows that the method can classify herbal prescriptions. However, the method does not consider the influence of herbal dosage, and there is no recognized standard for the drug-importance weights or the discrimination criteria, so it has limitations that need to be explored and improved in future research.
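Steps (4)-(6) can be sketched as a weighted-overlap score followed by thresholded matching. Everything concrete below is hypothetical: the importance weights 4..1, the 0.8 threshold, and the example formulas and herb lists are illustrations, not the study's actual criteria or targets:

```python
def prescription_similarity(candidate, target_weights):
    """Weighted overlap between a candidate prescription (a set of herb names)
    and a target prescription whose herbs carry importance weights."""
    total = sum(target_weights.values())
    hit = sum(w for herb, w in target_weights.items() if herb in candidate)
    return hit / total if total else 0.0

def classify(candidate, targets, threshold=0.8):
    """Match against every target and keep the best score above the threshold,
    preferring the larger target when scores tie ('large covers the small')."""
    scored = [(name, prescription_similarity(candidate, tw))
              for name, tw in targets.items()]
    scored = [(n, s) for n, s in scored if s >= threshold]
    if not scored:
        return None
    return max(scored, key=lambda ns: (ns[1], len(targets[ns[0]])))[0]

# Hypothetical targets with four importance levels encoded as weights 4..1.
targets = {
    "Maxing Shigan Tang": {"ephedra": 4, "apricot kernel": 3,
                           "gypsum": 4, "licorice": 1},
    "Yinqiao San": {"honeysuckle": 4, "forsythia": 4, "mint": 2,
                    "platycodon": 2, "licorice": 1},
}
rx = {"ephedra", "gypsum", "apricot kernel", "licorice", "ginger"}
match = classify(rx, targets)
```

Note that, like the study's method, this score ignores dosage: two prescriptions with identical herb sets but very different doses receive the same similarity, which is exactly the limitation flagged in the abstract.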


Subject(s)
Humans , COVID-19 , Algorithms , Databases, Factual , Prescriptions , Plant Extracts
19.
Rev. cuba. pediatr ; 952023. ilus, tab
Article in Spanish | LILACS, CUMED | ID: biblio-1515282

ABSTRACT

Introducción: La inflamación de la pleura desencadenada por bacterias y mediada por citocinas, aumenta la permeabilidad vascular y produce vasodilatación, lo cual genera desequilibrio entre la producción de líquido pleural y su capacidad de reabsorción por eficientes mecanismos fisiológicos. La condición anterior conduce al desarrollo de derrame pleural paraneumónico. Objetivo: Exponer la importancia de la correlación fisiopatológica y diagnóstica con los pilares fundamentales de actuación terapéutica en el derrame pleural paraneumónico. Métodos: Revisión en PubMed y Google Scholar de artículos publicados hasta abril de 2021 que abordaran el derrame pleural paraneumónico, su fisiopatología, elementos diagnósticos, tanto clínicos como resultados del estudio del líquido pleural, pruebas de imágenes, y estrategias terapéuticas. Análisis y síntesis de la información: El progreso de una infección pulmonar y la producción de una invasión de gérmenes al espacio pleural favorece la activación de mecanismos que conllevan al acúmulo de fluido, depósito de fibrina y formación de septos. Este proceso patológico se traduce en manifestaciones clínicas, cambios en los valores citoquímicos y resultados microbiológicos en el líquido pleural, que acompañados de signos radiológicos y ecográficos en el tórax, guían la aplicación oportuna de los pilares de tratamiento del derrame pleural paraneumónico. Conclusiones: Ante un derrame pleural paraneumónico, con tabiques o partículas en suspensión en la ecografía de tórax, hallazgo de fibrina, líquido turbio o pus en el proceder de colocación del drenaje de tórax, resulta necesario iniciar fibrinólisis intrapleural. Cuando el tratamiento con fibrinolíticos intrapleurales falla, la cirugía video-toracoscópica es el procedimiento quirúrgico de elección(AU)


Introduction: Inflammation of the pleura triggered by bacteria and mediated by cytokines increases vascular permeability and produces vasodilation, generating an imbalance between pleural fluid production and its reabsorption by normally efficient physiological mechanisms. This condition leads to the development of parapneumonic pleural effusion. Objective: To present the importance of the pathophysiological and diagnostic correlation with the fundamental pillars of therapeutic action in parapneumonic pleural effusion. Methods: Review in PubMed and Google Scholar of articles published up to April 2021 addressing parapneumonic pleural effusion: its pathophysiology; diagnostic elements, both clinical and from pleural fluid analysis; imaging tests; and therapeutic strategies. Analysis and synthesis of information: Progression of a lung infection and invasion of the pleural space by germs activate mechanisms that lead to fluid accumulation, fibrin deposition, and septum formation. This pathological process produces clinical manifestations, changes in cytochemical values, and microbiological findings in the pleural fluid which, together with radiological and ultrasound signs in the chest, guide the timely application of the pillars of treatment of parapneumonic pleural effusion. Conclusions: In a parapneumonic pleural effusion with septa or particles in suspension on chest ultrasound, or with fibrin, turbid fluid, or pus found during chest drain placement, intrapleural fibrinolysis should be initiated. When treatment with intrapleural fibrinolytics fails, video-thoracoscopic surgery is the surgical procedure of choice(AU)


Subject(s)
Humans , Pleural Effusion/classification , Pleural Effusion/physiopathology , Pleural Effusion/drug therapy , Pleural Effusion/diagnostic imaging , Drainage/instrumentation , Anti-Bacterial Agents
20.
Afr. j. lab. med. (Online) ; 12(1): 1-4, 2023. figures
Article in English | AIM | ID: biblio-1413499

ABSTRACT

Introduction: Determining the HIV status of some individuals remains challenging due to multidimensional factors such as flaws in diagnostic systems, technological challenges, and viral diversity. This report pinpoints challenges faced by the HIV testing system in Cameroon. Case presentation: A 53-year-old male received a positive HIV result by a rapid testing algorithm in July 2016. Not convinced of his HIV status, he requested additional tests. In February 2017, he received a positive result using ImmunoComb® II HIV 1 & 2 BiSpot and Roche cobas electrochemiluminescence assays. A sample sent to France in April 2017 was positive on the Bio-Rad GenScreen™ HIV 1/2, but serotyping was indeterminate, and viral load was < 20 copies/mL. The Roche electrochemiluminescence immunoassay and INNO-LIA HIV I/II Score were negative for samples collected in 2018. A sample collected in July 2019 and tested with the VIDAS® HIV Duo Ultra enzyme-linked fluorescent assay and Geenius™ HIV 1/2 Confirmatory Assay was positive, but negative by Western blot; the CD4 count was 1380 cells/mm3, and HIV proviral DNA tested in France was 'target-not-detected'. Some rapid tests were still positive in 2020 and 2021. Serotyping remained indeterminate, and viral load was 'target-not-detected'. There was no self-reported exposure to HIV risk factors, and his wife was HIV-seronegative. Management and outcome: Given that the patient remained asymptomatic with no evidence of viral replication, no antiretroviral therapy was initiated. Conclusion: This case highlights the struggles faced by some individuals in confirming their HIV status and the need to update existing technologies and develop an algorithm for managing exceptional cases.
